Shallow decision-making analysis in General Video Game Playing
The General Video Game AI competitions have been the testing ground for
several game-playing techniques, such as evolutionary computation, tree
search algorithms, and hyper-heuristic-based or knowledge-based algorithms.
So far, the metrics used to evaluate agent performance have been win ratio,
game score and game length. In this paper we provide a wider set of metrics
and a comparison method for evaluating and comparing agents. The metrics and
the comparison method give shallow introspection into the agent's
decision-making process, and they can be applied to any agent regardless of
its algorithmic nature. In this work, they are used to measure the impact of
the terms that compose the tree policy of an MCTS-based agent, comparing it
with several baseline agents. The results clearly show how promising such a
general approach is and how useful it can be for understanding the behaviour
of an AI agent; in particular, the comparison with baseline agents can help
reveal the shape of the agent's decision landscape. The presented metrics
and comparison method represent a step toward more descriptive ways of
logging and analysing agents' behaviours.
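The abstract does not spell out the metrics themselves. As a minimal sketch of the kind of shallow, algorithm-agnostic comparison it describes, one could log each agent's chosen actions over the same set of states and compare them against a baseline agent; the metric names and action traces below are illustrative assumptions, not the paper's actual definitions:

```python
from collections import Counter

def action_agreement(agent_actions, baseline_actions):
    """Fraction of states where two agents chose the same action.
    A crude, algorithm-agnostic comparison metric (illustrative only;
    not the exact metrics defined in the paper)."""
    assert len(agent_actions) == len(baseline_actions)
    matches = sum(a == b for a, b in zip(agent_actions, baseline_actions))
    return matches / len(agent_actions)

def action_profile(actions):
    """Empirical distribution of chosen actions: a shallow view of an
    agent's decision bias without inspecting its internals."""
    counts = Counter(actions)
    total = len(actions)
    return {a: c / total for a, c in counts.items()}

# Hypothetical logged traces from an MCTS agent and a random baseline
mcts_trace = ["LEFT", "LEFT", "USE", "RIGHT", "USE"]
random_trace = ["DOWN", "LEFT", "USE", "UP", "RIGHT"]
print(action_agreement(mcts_trace, random_trace))  # 0.4
```

Because such metrics only need logged actions, they apply equally to tree-search, evolutionary, or rule-based agents, which is the property the paper emphasises.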
Ensemble decision systems for general video game playing
Ensemble Decision Systems offer a form of decision making in which a
collection of algorithms reasons together about a problem. Each individual
algorithm has its own inherent strengths and weaknesses, and it is often
difficult to overcome the weaknesses while retaining the strengths. Instead
of altering the properties of a single algorithm, the Ensemble Decision
System augments its performance with other algorithms that have
complementary strengths. This work outlines different options for building
an Ensemble Decision System and analyses its performance against the
individual components of the system, with interesting results showing an
increase in the generality of the algorithms without significantly impeding
performance.
Comment: 8 pages, accepted at COG201
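The paper explores several ensemble designs, which are not detailed in the abstract. As one plausible, simple option, the components could each propose an action and a majority vote decide; the class name, component policies, and state dictionary below are all hypothetical:

```python
class EnsembleDecisionSystem:
    """Illustrative ensemble: each component proposes an action and the
    most common proposal wins. This majority-vote scheme is just one
    simple design, not necessarily the one used in the paper."""

    def __init__(self, components):
        self.components = components  # callables: state -> action

    def act(self, state):
        votes = [c(state) for c in self.components]
        # Most common proposal; ties are broken arbitrarily
        return max(set(votes), key=votes.count)

# Hypothetical components with complementary strengths
import random
greedy = lambda s: "RIGHT" if s["goal_dx"] > 0 else "LEFT"    # chases goals
cautious = lambda s: "NIL" if s["danger"] else "RIGHT"        # avoids risk
explorer = lambda s: random.choice(["UP", "DOWN", "LEFT", "RIGHT"])

ens = EnsembleDecisionSystem([greedy, cautious, explorer])
print(ens.act({"goal_dx": 1, "danger": False}))  # RIGHT
```

The appeal matches the abstract's argument: each component keeps its own strengths, and the ensemble's generality comes from their combination rather than from modifying any single algorithm.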
Multi-objective tree search approaches for general video game playing
The design of algorithms for game AI agents usually focuses on the single objective of winning, or of maximizing a given score. Even if the heuristic that guides the search (for reinforcement learning or evolutionary approaches) is composed of several factors, these typically provide a single numeric value (reward or fitness, respectively) to be optimized. Multi-objective approaches are an alternative way to tackle these problems, as they try to optimize several objectives, often contradictory ones, at the same time. This paper proposes for the first time a study of multi-objective approaches for General Video Game Playing, where the game to be played is not known a priori by the agent. The experimental study described here compares several algorithms in this setting, and the results suggest that multi-objective approaches can perform even better than their single-objective counterparts.
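The core mechanism that distinguishes multi-objective search from a single scalar reward is Pareto dominance: a candidate is kept only if no other candidate is at least as good on every objective and strictly better on one. A minimal sketch, assuming hypothetical per-action estimates of two objectives such as win likelihood and score:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximisation)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def non_dominated(candidates):
    """Filter a map {action: objective_vector} down to the Pareto front.
    Illustrative of multi-objective selection; not the specific
    algorithms compared in the paper."""
    return {
        act: objs for act, objs in candidates.items()
        if not any(dominates(other, objs)
                   for o_act, other in candidates.items() if o_act != act)
    }

# Hypothetical (win_likelihood, score) estimates per available action
cands = {"LEFT": (0.6, 10.0), "RIGHT": (0.4, 25.0), "NIL": (0.3, 5.0)}
print(non_dominated(cands))  # {'LEFT': (0.6, 10.0), 'RIGHT': (0.4, 25.0)}
```

Here "NIL" is dropped because "LEFT" beats it on both objectives, while "LEFT" and "RIGHT" survive as a genuine trade-off, which is exactly the structure a single scalar fitness would collapse.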
Rolling Horizon Coevolutionary planning for two-player video games
This paper describes a new algorithm for decision making in two-player real-time video games. As with Monte Carlo Tree Search, the algorithm can be used without heuristics and has been developed for use in general video game AI. The approach is to extend recent work on rolling horizon evolutionary planning, which has been shown to work well for single-player games, to two-player (or in principle many-player) games. To select an action, the algorithm co-evolves two (or in the general case N) populations, one for each player, where each individual is a sequence of actions for the respective player. The fitness of each individual is evaluated by playing it against a selection of action sequences from the opposing population. When choosing an action to take in the game, the first action is chosen from the fittest member of the population for that player. The new algorithm is compared with a number of general video game AI algorithms on a two-player space battle game, with promising results.
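The coevolution loop described in the abstract can be sketched directly: two populations of action sequences, fitness measured against samples of the opposing population, and the first action of the fittest individual returned. The action set, population sizes, mutation scheme, and `simulate` interface below are illustrative assumptions rather than the paper's exact configuration:

```python
import random

ACTIONS = ["UP", "DOWN", "LEFT", "RIGHT", "FIRE"]  # hypothetical action set
HORIZON, POP, GENS = 8, 10, 20  # illustrative parameter choices

def step(pop, opp_samples, eval_fn, rng):
    """One coevolution generation: rank individuals by total fitness
    against sampled opponent sequences, then refill the bottom half
    with mutated copies of the top half."""
    pop.sort(key=lambda ind: sum(eval_fn(ind, o) for o in opp_samples),
             reverse=True)
    half = len(pop) // 2
    for i in range(half, len(pop)):
        child = list(pop[i - half])
        child[rng.randrange(len(child))] = rng.choice(ACTIONS)
        pop[i] = child

def coevolve_action(simulate, rng=random):
    """simulate(seq_a, seq_b) is an assumed forward-model interface that
    rolls both sequences out together and returns player A's value;
    player B's fitness is its negation (a zero-sum assumption)."""
    pop_a = [[rng.choice(ACTIONS) for _ in range(HORIZON)] for _ in range(POP)]
    pop_b = [[rng.choice(ACTIONS) for _ in range(HORIZON)] for _ in range(POP)]
    for _ in range(GENS):
        step(pop_a, rng.sample(pop_b, 3), lambda a, b: simulate(a, b), rng)
        step(pop_b, rng.sample(pop_a, 3), lambda b, a: -simulate(a, b), rng)
    return pop_a[0][0]  # first action of player A's fittest sequence
```

Evaluating each individual against only a sample of the opposing population keeps the number of forward-model calls bounded, which matters under the real-time budgets these agents run on.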
Rolling Horizon NEAT for General Video Game Playing
This paper presents a new Statistical Forward Planning (SFP) method, Rolling
Horizon NeuroEvolution of Augmenting Topologies (rhNEAT). Unlike traditional
Rolling Horizon Evolution, where an evolutionary algorithm is in charge of
evolving a sequence of actions, rhNEAT evolves weights and connections of a
neural network in real-time, planning several steps ahead before returning an
action to execute in the game. Different versions of the algorithm are explored
in a collection of 20 GVGAI games, and compared with other SFP methods and
state-of-the-art results. Although the results are overall not better than
those of other SFP methods, rhNEAT's capacity to adapt to changing game
features has allowed it to establish new state-of-the-art records in games
that other methods have traditionally struggled with. The algorithm proposed
here is general and introduces a new way of representing information within
rolling horizon evolution techniques.
Comment: 8 pages, 5 figures, accepted for publication in IEEE Conference on Games (CoG) 202
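The key representational shift the abstract describes, evolving a policy network rather than a raw action sequence, can be illustrated in simplified form. Real rhNEAT evolves NEAT network topologies as well as weights; the sketch below evolves only a fixed-size weight vector for a toy linear policy, and the `forward_model` interface and parameter values are assumptions for illustration:

```python
import random

def evolve_policy_weights(forward_model, state, n_weights=8, horizon=5,
                          pop_size=10, gens=15, rng=random):
    """Simplified rolling horizon neuroevolution: the genome is a weight
    vector for a policy, and fitness is the reward accumulated by rolling
    that policy forward through the model for `horizon` steps.
    forward_model(state, action) -> (next_state, reward) is an assumed
    interface; a real rhNEAT agent would evolve NEAT topologies too."""

    def policy(weights, s):
        # Toy linear policy: score each action by a dot product of one
        # weight slice with the state features; pick the argmax.
        n_actions = len(weights) // len(s)
        scores = [sum(w * f for w, f in
                      zip(weights[a * len(s):(a + 1) * len(s)], s))
                  for a in range(n_actions)]
        return max(range(n_actions), key=scores.__getitem__)

    def rollout(weights):
        s, value = list(state), 0.0
        for _ in range(horizon):
            s, reward = forward_model(s, policy(weights, s))
            value += reward
        return value

    pop = [[rng.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=rollout, reverse=True)
        # Replace the bottom half with Gaussian-perturbed copies of the top
        for i in range(pop_size // 2, pop_size):
            pop[i] = [w + rng.gauss(0, 0.1) for w in pop[i - pop_size // 2]]
    return policy(pop[0], state)  # action to execute in the game now
```

Because the genome parameterises a policy rather than listing actions, the same individual can respond differently as game features change mid-rollout, which is the adaptivity the abstract credits for rhNEAT's records on games that defeat sequence-based planners.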